Conversation
Extract BaseLLMClient abstract class with shared logic (message building, response parsing, fact validation, enrichCommit orchestration). Add OpenAI, Gemini, and Ollama provider implementations alongside existing Anthropic.

- Add ILLMCaller interface for provider-agnostic simple completions
- Refactor IntentExtractor to use ILLMCaller instead of Anthropic SDK directly
- Expand LLMClientFactory with auto-detection priority chain
- Add ILLMConfig to hook config for YAML-based provider configuration
- Wire config-derived LLM options through DI container
- Add openai and @google/generative-ai as optional peer dependencies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
📝 Walkthrough

Adds multi-provider LLM support: new domain LLM types and caller interface, a BaseLLMClient with shared enrichment/complete logic, provider-specific clients (Anthropic, OpenAI, Gemini, Ollama), config parsing and DI wiring for optional llm config, and refactors IntentExtractor to use the provider-agnostic ILLMCaller.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client/Hook
    participant Config as Config Parser
    participant Container as DI Container
    participant Factory as LLMClientFactory
    participant Provider as Provider Client (ILLMCaller)
    participant Extractor as IntentExtractor
    Client->>Config: Load hook config (optional llm)
    Config-->>Container: Pass ILLMConfig
    Container->>Factory: createLLMClient(options?.llm)
    Factory->>Factory: detectProvider() (env/override)
    Factory->>Provider: Load/init provider client (runtime)
    Provider-->>Factory: ILLMCaller instance or null
    Factory-->>Container: Return caller
    Container->>Extractor: Inject caller
    Client->>Extractor: extract(message)
    Extractor->>Provider: caller.complete(systemPrompt, userMessage, maxTokens)
    Provider-->>Extractor: text response
    Extractor-->>Client: extracted keywords / skip
```

```mermaid
sequenceDiagram
    participant Caller as ILLMCaller
    participant Base as BaseLLMClient
    participant SDK as Provider SDK
    Caller->>Base: complete({system, userMessage, maxTokens})
    Base->>SDK: callAPI(system, userMessage, maxTokens)
    SDK-->>Base: IAPICallResult {text, inputTokens, outputTokens}
    Base-->>Caller: Return text
    Note over Base,Caller: enrichCommit uses ENRICHMENT_SYSTEM_PROMPT → parseResponse → validateFact → return facts + usage
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
Pull request overview
This PR adds multi-provider LLM support to git-mem, expanding from Anthropic-only to support OpenAI (GPT), Google Gemini, and Ollama. The refactoring extracts common enrichment logic into a BaseLLMClient abstract class, introduces an ILLMCaller interface for provider-agnostic LLM calls, and updates IntentExtractor to use the new interface instead of coupling directly to Anthropic's SDK. The factory auto-detects providers from environment variables with configurable priority, and adds optional YAML-based configuration via .git-mem/.git-mem.yaml.
Changes:

- Extract `BaseLLMClient` with shared message building, JSON parsing, and fact validation logic
- Add `ILLMCaller` interface and refactor `IntentExtractor` to be provider-agnostic
- Implement OpenAI, Gemini, and Ollama provider clients extending `BaseLLMClient`
- Expand `LLMClientFactory` with auto-detection priority and add `ILLMConfig` for YAML-based provider configuration
- Add `openai` and `@google/generative-ai` as optional peer dependencies; Ollama uses native fetch
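The factory's auto-detection priority (the `GIT_MEM_LLM_PROVIDER` override first, then Anthropic > OpenAI > Gemini > Ollama per the PR summary) can be sketched roughly as follows. The `GEMINI_API_KEY` name and the helper itself are illustrative assumptions, not the factory's actual code:

```typescript
type LLMProvider = 'anthropic' | 'openai' | 'gemini' | 'ollama';

// Priority chain: explicit override first, then each provider's env var.
// GEMINI_API_KEY is an assumed name; the others appear in the PR description.
function detectProvider(env: Record<string, string | undefined>): LLMProvider | null {
  const override = env.GIT_MEM_LLM_PROVIDER as LLMProvider | undefined;
  if (override) return override; // explicit choice always wins
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.GEMINI_API_KEY) return 'gemini';
  if (env.OLLAMA_HOST) return 'ollama';
  return null; // no provider configured — enrichment is skipped
}
```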
Reviewed changes
Copilot reviewed 23 out of 24 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| src/infrastructure/llm/BaseLLMClient.ts | New abstract base class providing shared enrichment logic for all LLM providers |
| src/infrastructure/llm/AnthropicLLMClient.ts | Refactored to extend BaseLLMClient, removing duplicate logic |
| src/infrastructure/llm/OpenAILLMClient.ts | New OpenAI (GPT) provider implementation |
| src/infrastructure/llm/GeminiLLMClient.ts | New Google Gemini provider implementation |
| src/infrastructure/llm/OllamaLLMClient.ts | New Ollama (local) provider implementation using native fetch |
| src/infrastructure/llm/LLMClientFactory.ts | Expanded to support multi-provider detection and creation with priority-based auto-detection |
| src/infrastructure/llm/IntentExtractor.ts | Refactored to use ILLMCaller interface instead of direct Anthropic SDK dependency |
| src/domain/interfaces/ILLMCaller.ts | New interface for simple LLM text completions |
| src/domain/interfaces/IHookConfig.ts | Added ILLMConfig and LLMProvider types for YAML configuration |
| src/hooks/utils/config.ts | Added parsing logic for llm configuration section |
| src/infrastructure/di/container.ts | Updated to pass llm config to createLLMClient and wire IntentExtractor with ILLMCaller |
| src/infrastructure/di/types.ts | Added llm field to IContainerOptions |
| tests/unit/infrastructure/llm/BaseLLMClient.test.ts | Comprehensive tests for shared base class logic |
| tests/unit/infrastructure/llm/AnthropicLLMClient.test.ts | Reduced to test only Anthropic-specific behavior |
| tests/unit/infrastructure/llm/OpenAILLMClient.test.ts | Unit tests for OpenAI client |
| tests/unit/infrastructure/llm/GeminiLLMClient.test.ts | Unit tests for Gemini client |
| tests/unit/infrastructure/llm/OllamaLLMClient.test.ts | Unit tests for Ollama client |
| tests/unit/infrastructure/llm/LLMClientFactory.test.ts | Expanded tests for multi-provider detection and creation |
| tests/unit/infrastructure/llm/IntentExtractor.test.ts | Refactored to use mock ILLMCaller instead of real LLM calls |
| tests/unit/hooks/utils/config.test.ts | Added tests for llm configuration parsing |
| package.json | Added openai and @google/generative-ai as optional peer dependencies |
| package-lock.json | Updated version to 0.4.0 |
| CLAUDE.md | Updated documentation with multi-provider setup instructions |
| .env.example | Expanded with examples for all supported providers |
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Around line 76-83: Add a blank line immediately before the fenced code block
that starts with "```yaml" in CLAUDE.md so the code block is separated from the
preceding paragraph (to satisfy markdownlint MD031); locate the block containing
"llm:" and ensure there is an empty line above the opening ```yaml fence so the
fenced code block is properly surrounded by blank lines.
In `@src/hooks/utils/config.ts`:
- Around line 128-132: Export VALID_PROVIDERS (and optionally isValidProvider)
so other modules can reuse the canonical list and validator; update the
declaration of VALID_PROVIDERS to be exported (export const VALID_PROVIDERS) and
export the isValidProvider function (export function isValidProvider) or move
both to the domain layer alongside the LLMProvider type if preferred, ensuring
imports elsewhere (e.g., LLMClientFactory, CLI validation) reference the
exported symbols VALID_PROVIDERS and isValidProvider.
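As a rough sketch of what the suggested export could look like (the surrounding module layout is assumed, not copied from the actual source):

```typescript
export type LLMProvider = 'anthropic' | 'openai' | 'gemini' | 'ollama';

// Exported so LLMClientFactory and CLI validation can reuse the canonical list.
export const VALID_PROVIDERS: readonly LLMProvider[] = [
  'anthropic',
  'openai',
  'gemini',
  'ollama',
] as const;

// Type guard: narrows an arbitrary string (e.g. from parsed YAML) to LLMProvider.
export function isValidProvider(value: string): value is LLMProvider {
  return (VALID_PROVIDERS as readonly string[]).includes(value);
}
```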
In `@src/infrastructure/di/container.ts`:
- Around line 116-127: The DI registration for intentExtractor is using an
unsafe double-cast for caller (client as unknown as ILLMCaller); change the
types so no cast is required: update the llm client type exposed on the
container (container.cradle.llmClient) to satisfy ILLMCaller (either by having
the concrete ILLMClient implement/extend ILLMCaller or by typing llmClient as
ILLMCaller), then instantiate new IntentExtractor({ caller:
container.cradle.llmClient, logger: container.cradle.logger }) without casting;
ensure interface names referenced are ILLMCaller, any existing ILLMClient
implementation, and the IntentExtractor constructor signature remain consistent.
In `@src/infrastructure/llm/BaseLLMClient.ts`:
- Around line 4-9: Update the class header comment in BaseLLMClient to remove
the reference to the non-existent callComplete() and state that concrete
providers only need to implement callAPI(); reference the class name
BaseLLMClient and the method callAPI() so the documentation matches the actual
abstract API surface.
- Around line 70-105: The complete method in BaseLLMClient ignores
options.maxTokens so callers can't limit output; update the call flow to accept
and propagate maxTokens: change the abstract callAPI signature to include an
optional maxTokens parameter (e.g., callAPI(systemPrompt: string, userMessage:
string, maxTokens?: number): Promise<IAPICallResult>), update
BaseLLMClient.complete to pass options.maxTokens into callAPI, and update all
provider implementations (Anthropic/OpenAI/Gemini/Ollama) to accept and honor
the optional maxTokens when making their API requests; ensure any type/interface
definitions (ILLMCallerOptions, provider call signatures) are updated
accordingly.
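A condensed sketch of the suggested signature change, with a stub provider standing in for the real clients (names follow the review comment, not the actual source):

```typescript
interface IAPICallResult {
  text: string;
  inputTokens: number;
  outputTokens: number;
}

// complete() forwards options.maxTokens into the abstract callAPI(), so each
// provider can honor the caller's token limit.
abstract class BaseLLMClient {
  protected abstract callAPI(
    systemPrompt: string,
    userMessage: string,
    maxTokens?: number,
  ): Promise<IAPICallResult>;

  async complete(options: { system: string; userMessage: string; maxTokens?: number }): Promise<string> {
    const result = await this.callAPI(options.system, options.userMessage, options.maxTokens);
    return result.text;
  }
}

// Stub provider that records the forwarded limit instead of calling an SDK.
class StubClient extends BaseLLMClient {
  lastMaxTokens?: number;
  protected async callAPI(_system: string, _user: string, maxTokens?: number): Promise<IAPICallResult> {
    this.lastMaxTokens = maxTokens;
    return { text: 'ok', inputTokens: 0, outputTokens: 0 };
  }
}
```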
In `@src/infrastructure/llm/GeminiLLMClient.ts`:
- Around line 41-49: The code is triggering ESLint rules for an explicit any and
using require(); update the optional dependency loading in GeminiLLMClient by
adding the same inline ESLint directives used in OpenAILLMClient: annotate the
GoogleGenerativeAI variable to avoid the no-explicit-any rule and wrap the
require call with a no-require-imports eslint-disable-next-line comment (and
re-enable if needed), then keep the existing try/catch that throws LLMError if
the module is absent; reference the GoogleGenerativeAI variable and the
require('@google/generative-ai') usage to locate where to add these directives.
In `@src/infrastructure/llm/LLMClientFactory.ts`:
- Around line 38-83: ESLint flags the synchronous require() calls inside
LLMClientFactory (the switch cases for 'anthropic', 'openai', 'gemini',
'ollama') as forbidden; either convert the factory to an async function and
replace each require('./XLLMClient') with await import('./XLLMClient') and then
construct the client (e.g., import { OpenAILLMClient } via await import and
return new OpenAILLMClient(...)), or keep the synchronous API and add targeted
eslint-disable-next-line comments immediately above each require with a brief
justification that these are optional peer-dependency lazy-loads; update callers
if you choose the async approach so they await LLMClientFactory.create(...) (or
equivalent factory function).
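If the async route is chosen, the lazy-load could look roughly like this — the specifier strings are illustrative, not the project's real module paths:

```typescript
// A missing optional peer dependency resolves to null instead of crashing the
// factory, letting detection fall through to the next provider.
async function loadOptionalModule(specifier: string): Promise<unknown> {
  try {
    return await import(specifier);
  } catch {
    return null; // module not installed
  }
}
```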
In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 44-52: The code in OpenAILLMClient.ts uses a require() and an
any-typed variable which trigger ESLint rules; update the optional dependency
loading block that sets OpenAI (and uses mod) to include inline ESLint disables
for `@typescript-eslint/no-explicit-any` and
`@typescript-eslint/no-require-imports`, and move the orphaned comment from lines
~13-14 into this usage site above the try/catch; ensure the thrown LLMError
remains unchanged and reference the OpenAI variable and mod/default resolution
(mod.default ?? mod) so the linter exceptions only apply to this optional-import
pattern.
In `@tests/unit/infrastructure/llm/OpenAILLMClient.test.ts`:
- Around line 8-26: The finally block in the OpenAILLMClient.test.ts case
doesn't correctly restore the environment: change the cleanup so that after the
test the OPENAI_API_KEY is restored to its original value when saved !==
undefined, and explicitly deleted (delete process.env.OPENAI_API_KEY) when saved
is undefined; update the finally block around the new OpenAILLMClient()
assertion to mirror the Gemini test pattern and use the saved variable to decide
between restoring the value or removing the env var.
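The restore logic the comment asks for amounts to this pattern (the helper name and shape are illustrative):

```typescript
// Run fn with an env var temporarily set; afterwards, restore the original
// value when one existed, and delete the key entirely when it did not.
function withEnvVar<T>(key: string, value: string, fn: () => T): T {
  const saved = process.env[key];
  process.env[key] = value;
  try {
    return fn();
  } finally {
    if (saved !== undefined) {
      process.env[key] = saved;
    } else {
      delete process.env[key]; // don't leave a stale value behind
    }
  }
}
```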
- Fix BaseLLMClient JSDoc: callComplete() → callAPI() with correct description
- Improve ICommitMsgConfig.enrich JSDoc with all provider env var names
- Make ILLMClient extend ILLMCaller, removing the `as unknown as` cast in DI container

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
…up (GIT-110)

- Pass maxTokens through callAPI() to all providers so ILLMCaller.complete() respects the caller's token limit (e.g., IntentExtractor uses 100)
- Add eslint-disable comments for dynamic require() of optional peer deps
- Fix env var restoration in OpenAI test cleanup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/AnthropicLLMClient.ts`:
- Around line 38-60: The AnthropicLLMClient.callAPI implementation lacks
consistent SDK error wrapping; update the callAPI method in the
AnthropicLLMClient class to wrap the SDK call in a try/catch, catch any thrown
error from this.client.messages.create or subsequent processing, and rethrow a
new LLMError (including context like model, systemPrompt/userMessage summary,
and the original error) so that direct calls to complete() receive the same
LLMError shape as OpenAILLMClient and GeminiLLMClient.
In `@src/infrastructure/llm/LLMClientFactory.ts`:
- Around line 35-44: The Anthropic branch in LLMClientFactory uses a dynamic
require for AnthropicLLMClient but lacks the try/catch wrapper other providers
use; wrap the require and instantiation in a try/catch around the block that
loads './AnthropicLLMClient' and constructs new AnthropicLLMClient({ apiKey,
model: options?.model }), and on any thrown error return null (and optionally
log the error using the same logger/pattern used by other cases) so loading
failures behave consistently with the other providers.
In `@src/infrastructure/llm/OllamaLLMClient.ts`:
- Around line 91-106: The catch block in OllamaLLMClient currently wraps all
errors into a generic LLMError; modify it to detect request timeouts by checking
for an AbortError (e.g. error.name === 'AbortError' or error instanceof
DOMException) coming from AbortController.abort(), and throw a specific LLMError
like "Ollama API request timed out" (include timeout duration if available)
instead of the generic message; keep existing ECONNREFUSED handling and ensure
the new timeout branch still preserves the original error as the cause when
constructing the LLMError.
Pull request overview
Copilot reviewed 24 out of 25 changed files in this pull request and generated 9 comments.
… (GIT-110)

- Remove stale eslint-disable comment at top of OpenAILLMClient.ts
- Use top-level import type for ILLMConfig in types.ts
- Add try/catch to anthropic case in factory for consistency
- Mark intentModel as reserved (not yet wired)
- Fix CLAUDE.md markdown: blank line before fenced code block
- Clarify test comments about optional SDK loading

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 42-47: The local variable OpenAI is typed as `any`; change this to
a minimal structural type and use `unknown` when loading the optional SDK:
declare a minimal module shape (e.g. a type alias for the OpenAI
constructor/namespace you need) and type the variable as that shape | undefined,
then cast the result of require('openai') from unknown to that type (use
mod.default ?? mod as the typed module) instead of using `any`. Update the
symbols OpenAI and the require('openai') loading block in OpenAILLMClient.ts to
use the new minimal type and unknown cast.
- OllamaLLMClient: detect AbortError specifically for timeout messages
- LLMClientFactory.test.ts: clarify test names and comments for SDK behavior

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
…110)

- OpenAILLMClient: typed OpenAIConstructor replaces any
- GeminiLLMClient: typed GenAIConstructor/GeminiModel replaces any
- Both use unknown + cast instead of eslint-disable for no-explicit-any

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Decision: When dynamically loading optional SDKs, use structural typing instead of 'any' to maintain type safety while allowing for optional dependencies.
AI-Confidence: verified
AI-Tags: typescript, type-safety, optional-dependencies, sdk-loading, eslint, type-casting, code-quality, api-design, dependency-management, structural-typing, dynamic-imports, require
AI-Lifecycle: project
AI-Memory-Id: ee99d9b6
AI-Source: llm-enrichment
Pull request overview
Copilot reviewed 24 out of 25 changed files in this pull request and generated no new comments.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/GeminiLLMClient.ts`:
- Around line 67-92: Move the GoogleGenerativeAI instantiation and the
getGenerativeModel call into the existing try block so any errors during SDK
construction or model creation are caught and rethrown as LLMError; specifically
wrap creation of new GoogleGenerativeAI(this.apiKey) and
genAI.getGenerativeModel({ model: this.model, systemInstruction: systemPrompt })
inside the try before calling model.generateContent, and keep the existing catch
that converts non-LLMError exceptions into new LLMError(`Gemini API call failed:
${message}`) so callAPI()/complete() always receive an LLMError.
In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 42-65: The OpenAI SDK resolution and try/catch should be moved out
of per-call execution and cached on the OpenAILLMClient instance: add a private
field (e.g., this.OpenAI?: OpenAIConstructor) and initialize it either in the
OpenAILLMClient constructor or via a lazy initializer used by callAPI; have the
initializer perform the require('openai') logic and throw LLMError if missing,
and then have callAPI use the cached this.OpenAI to construct the client,
avoiding repeated try/catch and repeated resolution on every call.
```typescript
const genAI = new GoogleGenerativeAI(this.apiKey);
const model = genAI.getGenerativeModel({
  model: this.model,
  systemInstruction: systemPrompt,
});

try {
  const result = await model.generateContent({
    contents: [{ role: 'user', parts: [{ text: userMessage }] }],
    generationConfig: { maxOutputTokens: maxTokens ?? this.maxTokens },
  });

  const response = result.response;
  const text = response.text() ?? '';
  const usage = response.usageMetadata;

  return {
    text,
    inputTokens: usage?.promptTokenCount ?? 0,
    outputTokens: usage?.candidatesTokenCount ?? 0,
  };
} catch (error) {
  if (error instanceof LLMError) throw error;
  const message = error instanceof Error ? error.message : String(error);
  throw new LLMError(`Gemini API call failed: ${message}`);
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Inspect BaseLLMClient error-handling expectations for callAPI
head -200 src/infrastructure/llm/BaseLLMClient.ts
```

Repository: TonyCasey/git-mem
Length of output: 6670

🏁 Script executed:

```shell
# Inspect full GeminiLLMClient.ts to see context around lines 67-92
cat -n src/infrastructure/llm/GeminiLLMClient.ts
```

Repository: TonyCasey/git-mem
Length of output: 3894

🏁 Script executed:

```shell
# Also check OpenAILLMClient.ts to see the reference error-wrapping pattern
cat -n src/infrastructure/llm/OpenAILLMClient.ts
```

Repository: TonyCasey/git-mem
Length of output: 3581
Move SDK instantiation into the try-catch block to ensure callAPI always throws LLMError.
Lines 67-71 create the GoogleGenerativeAI client and model outside the error handling block. If instantiation fails (e.g., invalid API key), the error propagates unwrapped to callAPI() callers like complete(), breaking the error-shape contract. These calls must be guarded by the try-catch on line 73.
🔧 Proposed fix

```diff
-  const genAI = new GoogleGenerativeAI(this.apiKey);
-  const model = genAI.getGenerativeModel({
-    model: this.model,
-    systemInstruction: systemPrompt,
-  });
-
-  try {
-    const result = await model.generateContent({
+  try {
+    const genAI = new GoogleGenerativeAI(this.apiKey);
+    const model = genAI.getGenerativeModel({
+      model: this.model,
+      systemInstruction: systemPrompt,
+    });
+    const result = await model.generateContent({
       contents: [{ role: 'user', parts: [{ text: userMessage }] }],
       generationConfig: { maxOutputTokens: maxTokens ?? this.maxTokens },
     });
```

🤖 Prompt for AI Agents
});🤖 Prompt for AI Agents
In `@src/infrastructure/llm/GeminiLLMClient.ts` around lines 67 - 92, Move the
GoogleGenerativeAI instantiation and the getGenerativeModel call into the
existing try block so any errors during SDK construction or model creation are
caught and rethrown as LLMError; specifically wrap creation of new
GoogleGenerativeAI(this.apiKey) and genAI.getGenerativeModel({ model:
this.model, systemInstruction: systemPrompt }) inside the try before calling
model.generateContent, and keep the existing catch that converts non-LLMError
exceptions into new LLMError(`Gemini API call failed: ${message}`) so
callAPI()/complete() always receive an LLMError.
```typescript
type OpenAIConstructor = new (opts: { apiKey: string }) => {
  chat: {
    completions: {
      create: (params: {
        model: string;
        max_tokens: number;
        messages: Array<{ role: 'system' | 'user'; content: string }>;
      }) => Promise<{
        choices: Array<{ message?: { content?: string } }>;
        usage?: { prompt_tokens?: number; completion_tokens?: number };
      }>;
    };
  };
};

let OpenAI: OpenAIConstructor;
try {
  // eslint-disable-next-line @typescript-eslint/no-require-imports
  const mod: unknown = require('openai');
  OpenAI = ((mod as { default?: OpenAIConstructor }).default ?? mod) as OpenAIConstructor;
} catch {
  throw new LLMError(
    'OpenAI SDK not installed. Run: npm install openai',
  );
}
```
🧹 Nitpick | 🔵 Trivial
Consider caching the SDK constructor at instance level.
The SDK is loaded inside callAPI, causing the try/catch to execute on every call. While Node.js caches require() results, moving the SDK resolution to constructor or a lazy-init pattern would avoid the repeated overhead.
This is a minor optimization and not blocking.
♻️ Optional refactor to cache SDK at instance level

```diff
 export class OpenAILLMClient extends BaseLLMClient {
   private readonly apiKey: string;
+  private readonly OpenAI: OpenAIConstructor;

   constructor(options?: IOpenAILLMClientOptions) {
     const apiKey = options?.apiKey || process.env.OPENAI_API_KEY;
     if (!apiKey) {
       throw new LLMError('OpenAI API key required. Set OPENAI_API_KEY or pass apiKey option.');
     }
+    // Load SDK at construction time
+    let OpenAI: OpenAIConstructor;
+    try {
+      // eslint-disable-next-line @typescript-eslint/no-require-imports
+      const mod: unknown = require('openai');
+      OpenAI = ((mod as { default?: OpenAIConstructor }).default ?? mod) as OpenAIConstructor;
+    } catch {
+      throw new LLMError('OpenAI SDK not installed. Run: npm install openai');
+    }
     super(options?.model || DEFAULT_MODEL, options?.maxTokens || DEFAULT_MAX_TOKENS);
     this.apiKey = apiKey;
+    this.OpenAI = OpenAI;
   }
```

🤖 Prompt for AI Agents
In `@src/infrastructure/llm/OpenAILLMClient.ts` around lines 42 - 65, The OpenAI
SDK resolution and try/catch should be moved out of per-call execution and
cached on the OpenAILLMClient instance: add a private field (e.g., this.OpenAI?:
OpenAIConstructor) and initialize it either in the OpenAILLMClient constructor
or via a lazy initializer used by callAPI; have the initializer perform the
require('openai') logic and throw LLMError if missing, and then have callAPI use
the cached this.OpenAI to construct the client, avoiding repeated try/catch and
repeated resolution on every call.
Summary

- `BaseLLMClient` abstract class with shared enrichment logic (message building, JSON parsing, fact validation)
- `ILLMCaller` interface and refactored `IntentExtractor` to be provider-agnostic (removes direct `@anthropic-ai/sdk` dependency)
- OpenAI (`gpt-4o`), Gemini (`gemini-2.0-flash`), and Ollama (`llama3.2`) provider implementations
- `LLMClientFactory` auto-detection priority: `GIT_MEM_LLM_PROVIDER` env > Anthropic > OpenAI > Gemini > Ollama
- `ILLMConfig` in hook config for YAML-based provider/model configuration
- `openai` and `@google/generative-ai` as optional peer dependencies; Ollama uses native `fetch`

Test plan

- `npm run type-check`
- `npm run lint`
- `npm run test:unit`
- `npm run test:integration`
- `npm run build`
- Set `OPENAI_API_KEY` and run commit with `enrich: true`
- Set `OLLAMA_HOST` and run commit with `enrich: true`

🤖 Generated with Claude Code
Summary by CodeRabbit
- New Features
- Configuration
- Documentation
- Chores
- Tests